Jeff Chang, Myriam Hamed Torres, and Jason Baldridge from the Google DeepMind team join host Logan Kilpatrick for a deep dive into Lyria 3, Google’s latest music generation model. Their conversation explores the transition from simple audio generation to a model that acts as a collaborative instrument, providing creators with fine-grained control over mood, instrumentation, and vocals. Learn more about the technical challenges of prompt adherence in music, the importance of "vibe" in human evaluations, and the future of layered, iterative music composition.
Chapters:
0:00 - Intro
1:00 - Defining music generation models
1:40 - Lyria as a new instrument
3:05 - Connecting language and creative intent
5:08 - Guest backgrounds and musical journeys
7:57 - Demo: Instrumental funk jam
8:29 - Bridging the gap for non-musicians
12:03 - Demo: Exploring lyrics and vocals
15:07 - The magic of iterative co-creation
15:40 - Meeting users across the expertise spectrum
17:01 - Empowering new musical expressions
18:29 - Emotional and communal impact of music
19:51 - Opportunities for developers and community
21:09 - Real-time vs. song generation models
23:23 - Creating experimental sonic landscapes
25:08 - Demo: Capturing unexpectedness and energy
28:33 - Evaluating music through taste and expertise
31:30 - The diligence of music evaluation
31:52 - The future of Lyria and AI-first workflows
35:07 - Articulating creative vision through language
Listen to this podcast:
Apple Podcasts →